Nearly Optimal Private LASSO

Authors

  • Kunal Talwar
  • Abhradeep Thakurta
  • Li Zhang
Abstract

We present a nearly optimal differentially private version of the well-known LASSO estimator. Our algorithm provides privacy protection with respect to each individual training example. The excess risk of our algorithm, compared to the non-private version, is Õ(1/n^{2/3}), assuming all the input data has bounded ℓ∞ norm. This is the first differentially private algorithm that achieves such a bound without polynomial dependence on p and without additional assumptions on the design matrix. In addition, we show that this error bound is nearly optimal amongst all differentially private algorithms.
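
To make the abstract's claim concrete, the following spells out the constrained LASSO program and the excess-risk quantity being bounded, under the normalization standard in this line of work (each ‖x_i‖∞ ≤ 1, |y_i| ≤ 1, and an ℓ1-ball constraint of radius 1). This is a sketch for context, not text from the paper:

    \hat{\theta} \;=\; \operatorname*{arg\,min}_{\|\theta\|_1 \le 1} \; \mathcal{L}(\theta; D),
    \qquad
    \mathcal{L}(\theta; D) \;=\; \frac{1}{n} \sum_{i=1}^{n} \bigl( \langle x_i, \theta \rangle - y_i \bigr)^2,

and the stated guarantee bounds the excess empirical risk of the private output:

    \mathcal{L}(\theta^{\mathrm{priv}}; D) \;-\; \min_{\|\theta\|_1 \le 1} \mathcal{L}(\theta; D) \;=\; \tilde{O}\bigl( n^{-2/3} \bigr).

One standard way to obtain a guarantee of this shape is a Frank–Wolfe iteration over the ℓ1 ball whose linear-minimization step is privatized with the exponential mechanism. The sketch below illustrates that idea; the function name, iteration count, and per-step privacy accounting are all chosen for exposition rather than taken from the paper:

    import numpy as np

    def private_frank_wolfe_lasso(X, y, epsilon, T=None, rng=None):
        """Illustrative differentially private Frank-Wolfe for
        min_{||theta||_1 <= 1} (1/n) ||X theta - y||^2.

        Assumes ||x_i||_inf <= 1 and |y_i| <= 1; the constants and the
        privacy accounting are simplified, not tuned choices.
        """
        rng = np.random.default_rng() if rng is None else rng
        n, p = X.shape
        T = max(1, int(n ** (2 / 3))) if T is None else T  # illustrative iteration count
        eps_step = epsilon / T  # simple composition across steps; tighter accounting exists
        theta = np.zeros(p)
        for t in range(T):
            grad = (2.0 / n) * (X.T @ (X @ theta - y))
            # Linear minimization over the l1 ball picks a signed coordinate
            # vector +/- e_j; privatize that choice via the exponential mechanism.
            # Utility of a vertex v is -<grad, v>: first p entries are +e_j, last p are -e_j.
            utilities = np.concatenate([-grad, grad])
            sensitivity = 4.0 / n  # max change in any utility from swapping one example
            logits = eps_step * utilities / (2.0 * sensitivity)
            logits -= logits.max()  # numerical stability before exponentiation
            probs = np.exp(logits)
            probs /= probs.sum()
            k = rng.choice(2 * p, p=probs)
            vertex = np.zeros(p)
            vertex[k % p] = 1.0 if k < p else -1.0
            gamma = 2.0 / (t + 2)  # standard Frank-Wolfe step size
            theta = (1.0 - gamma) * theta + gamma * vertex
        return theta

A call such as private_frank_wolfe_lasso(X, y, epsilon=1.0) returns a vector in the unit ℓ1 ball whose empirical risk can then be compared against the non-private LASSO solution.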


Similar Papers

Differentially Private Model Selection via Stability Arguments and the Robustness of the Lasso

We design differentially private algorithms for statistical model selection. Given a data set and a large, discrete collection of “models”, each of which is a family of probability distributions, the goal is to determine the model that best “fits” the data. This is a basic problem in many areas of statistics and machine learning. We consider settings in which there is a well-defined answer, in ...

Differentially Private Feature Selection via Stability Arguments, and the Robustness of the Lasso

We design differentially private algorithms for statistical model selection. Given a data set and a large, discrete collection of “models”, each of which is a family of probability distributions, the goal is to determine the model that best “fits” the data. This is a basic problem in many areas of statistics and machine learning. We consider settings in which there is a well-defined...

Reweighting the Lasso

This paper investigates how changing the growth rate of the sequence of penalty weights affects the asymptotics of Lasso-type estimators. The cases of non-singular and nearly singular design are considered.

The Lasso with Nearly Orthogonal Latin Hypercube Designs

We consider the Lasso problem when the input values need to take multiple levels. In this situation, we propose to use nearly orthogonal Latin hypercube designs, originally motivated by computer experiments, to significantly enhance the variable selection accuracy of the Lasso. The use of such designs ensures small column-wise correlations in variable selection and gives flexibility in identify...

Penalized Linear Unbiased Selection

We introduce MC+, a fast, continuous, nearly unbiased, and accurate method of penalized variable selection in high-dimensional linear regression. The LASSO is fast and continuous, but biased. The bias of the LASSO interferes with variable selection. Subset selection is unbiased but computationally costly. The MC+ has two elements: a minimax concave penalty (MCP) and a penalized linear unbiased ...
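
For reference, the minimax concave penalty mentioned above is commonly written as follows (a sketch from the MCP literature; λ is the regularization parameter and γ > 1 the concavity parameter, neither of which is defined in the snippet):

    \rho_{\lambda,\gamma}(t) \;=\; \lambda \int_0^{t} \Bigl( 1 - \frac{x}{\gamma\lambda} \Bigr)_{+} \, dx
    \;=\;
    \begin{cases}
      \lambda t - \dfrac{t^{2}}{2\gamma}, & 0 \le t \le \gamma\lambda, \\
      \dfrac{\gamma\lambda^{2}}{2}, & t > \gamma\lambda,
    \end{cases}

so the penalty matches the LASSO's λt near the origin but flattens to a constant, which is what removes the bias on large coefficients.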



Publication date: 2015